
    Algorithm Diversity for Resilient Systems

    Diversity can significantly increase the resilience of systems by reducing the prevalence of shared vulnerabilities and making vulnerabilities harder to exploit. Work on software diversity for security typically creates variants of a program using low-level code transformations. This paper is the first to study algorithm diversity for resilience. We first describe how a method based on high-level invariants and systematic incrementalization can be used to create algorithm variants. Executing multiple variants in parallel and comparing their outputs provides greater resilience than executing a single variant. To prevent different parallel schedules from causing the variants' behaviors to diverge, we present a synchronized execution algorithm for DistAlgo, an extension of Python for high-level, precise, executable specifications of distributed algorithms. We propose static and dynamic metrics for measuring diversity. An experimental evaluation of algorithm diversity combined with implementation-level diversity, for several sequential and distributed algorithms, shows the benefits of algorithm diversity.
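
    As a rough illustration of the execution model only (not the paper's DistAlgo implementation), the Python sketch below runs two hypothetical variants of the same computation in parallel and cross-checks their outputs; the maximum-subarray variants stand in for the invariant-derived variants described above.

# Sketch: run diverse algorithm variants in parallel and cross-check outputs.
# The two maximum-subarray variants are illustrative stand-ins, not the
# invariant-based variants generated in the paper.
from concurrent.futures import ThreadPoolExecutor

def max_subarray_direct(xs):
    """O(n^2) variant: try every start index."""
    best = xs[0]
    for i in range(len(xs)):
        total = 0
        for j in range(i, len(xs)):
            total += xs[j]
            best = max(best, total)
    return best

def max_subarray_incremental(xs):
    """O(n) variant maintaining an incrementally updated running sum."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def run_diverse(variants, arg):
    """Execute all variants in parallel; flag any divergence in their outputs."""
    with ThreadPoolExecutor(max_workers=len(variants)) as pool:
        futures = [pool.submit(v, arg) for v in variants]
        results = [f.result() for f in futures]
    if len(set(results)) != 1:
        raise RuntimeError(f"variant outputs diverge: {results}")
    return results[0]

print(run_diverse([max_subarray_direct, max_subarray_incremental],
                  [3, -2, 5, -1, 4, -7, 2]))   # both variants agree on 9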

    Software fault-tolerance by design diversity DEDIX: A tool for experiments

    The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Because the errors in the outputs of the versions are expected to be different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common-mode errors, which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
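
    A minimal sketch of the underlying N-version idea, assuming three illustrative versions and a simple majority vote at a cross-check point; it is not the DEDIX testbed or its actual decision algorithm.

# Sketch: run N independently written versions concurrently and mask a
# minority of faulty results by majority voting at a cross-check point.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def vote(results):
    """Majority vote over version outputs; masks a minority of erroneous results."""
    value, count = Counter(results).most_common(1)[0]
    if count * 2 <= len(results):
        raise RuntimeError(f"no majority among version outputs: {results}")
    return value

def cross_check(versions, *args):
    """Run all versions concurrently and adjudicate their results."""
    with ThreadPoolExecutor(max_workers=len(versions)) as pool:
        futures = [pool.submit(v, *args) for v in versions]
        return vote(tuple(f.result() for f in futures))

# Three hypothetical versions of the same specification: multiply two integers.
version_a = lambda x, y: x * y
version_b = lambda x, y: sum(x for _ in range(y))   # multiplication by repeated addition
version_c = lambda x, y: x * y + 1                  # version with a seeded fault
print(cross_check([version_a, version_b, version_c], 6, 7))   # fault masked, prints 42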

    Event-B Patterns for Specifying Fault-Tolerance in Multi-Agent Interaction

    Interaction in a multi-agent system is susceptible to failure. A rigorous development of a multi-agent system must therefore treat fault tolerance of agent interactions so that agents can continue to function independently. Patterns can be used to capture fault-tolerance techniques. A set of modelling patterns is presented that specifies fault tolerance in Event-B specifications of multi-agent interactions. The purpose of these patterns is to capture common modelling structures for distributed agent interaction in a form that is reusable in other related developments. The patterns have been applied to a case study of the contract net interaction protocol.
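
    As a deliberately simplified rendering of one such fault-tolerance pattern, the Python sketch below shows a contract net initiator that tolerates crashed or silent bidders via a timeout; it illustrates the behaviour the patterns model but is not the Event-B formalisation from the paper.

# Sketch: a contract net call-for-proposals that tolerates faulty participants.
# The agents and bids are hypothetical; the point is that the interaction
# completes even when some participants fail to respond.
from concurrent.futures import ThreadPoolExecutor

def crashed_bidder(task):
    raise RuntimeError("simulated agent failure")

participants = {
    "agent1": lambda task: 10.0,   # well-behaved bidder
    "agent2": lambda task: 7.5,    # well-behaved bidder
    "agent3": crashed_bidder,      # faulty agent
}

def call_for_proposals(participants, task, timeout=1.0):
    """Collect bids from every participant, skipping any that fail or stay silent."""
    proposals = {}
    with ThreadPoolExecutor(max_workers=len(participants)) as pool:
        futures = {name: pool.submit(bid, task) for name, bid in participants.items()}
        for name, future in futures.items():
            try:
                proposals[name] = future.result(timeout=timeout)
            except Exception:
                continue  # tolerate the fault; the interaction still completes
    # Award the task to the cheapest surviving proposal, if any arrived.
    return min(proposals.items(), key=lambda kv: kv[1]) if proposals else None

print(call_for_proposals(participants, "deliver-parcel"))   # ('agent2', 7.5)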

    Automated Model-based Attack Tree Analysis using HiP-HOPS

    As Cyber-Physical Systems (CPS) grow increasingly complex and interact with external CPS, system security remains a nontrivial challenge that scales accordingly, with potentially devastating consequences if left unchecked. Although there is a significant body of work on system security in industry practice, manual diagnosis of security vulnerabilities is still widely applied. Such approaches are typically resource-intensive, scale poorly, and introduce additional risk due to human error. In this paper, a model-based approach for Security Attack Tree analysis using the HiP-HOPS dependability analysis tool is presented. The approach is demonstrated within the context of a simple web-based medical application to automatically generate attack trees, encapsulated as Digital Dependability Identities (DDIs), for offline security analysis. The paper goes on to present how the produced DDIs can be used to support security maintenance, identifying security capabilities and controls that counter diagnosed vulnerabilities.
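
    For readers unfamiliar with the structure being generated, the hypothetical Python sketch below shows the kind of AND/OR attack tree such an analysis operates over, evaluated here for minimum attacker cost; it is not HiP-HOPS, its DDI format, or the case study's actual model.

# Sketch: an AND/OR attack tree over basic attack steps, evaluated for the
# cheapest path to the root goal. The web-application example is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"          # "LEAF", "AND", or "OR"
    cost: float = 0.0           # cost of a basic attack step (leaves only)
    children: list = field(default_factory=list)

def min_attack_cost(node):
    """Cheapest way for an attacker to achieve the node's goal."""
    if node.gate == "LEAF":
        return node.cost
    child_costs = [min_attack_cost(c) for c in node.children]
    return sum(child_costs) if node.gate == "AND" else min(child_costs)

# Hypothetical tree for a web-based medical application.
tree = Node("access patient records", "OR", children=[
    Node("steal clinician credentials", "AND", children=[
        Node("phish clinician", cost=3.0),
        Node("bypass two-factor authentication", cost=8.0),
    ]),
    Node("exploit injection flaw in records API", cost=6.0),
])
print(min_attack_cost(tree))   # 6.0: the injection path is the cheapest attack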

    Identifying Safety and Human Factors Issues in Rail using IRIS and CAIRIS

    Security, safety and human factors engineering techniques are largely disconnected, although the underlying concepts are interlinked. We present a tool-supported approach based on the Integrating Requirements and Information Security (IRIS) framework, using the Computer Aided Integration of Requirements and Information Security (CAIRIS) platform, to identify safety and human factors issues in rail. We illustrate this approach with a case study, which provides a vehicle for increasing the existing collaboration between engineers in security, safety and human factors.